# Sparse Mixture of Experts
| Model | Author | License | Description | Tags | Downloads | Likes |
|---|---|---|---|---|---|---|
| Mixtral 8x22B V0.1 | mistralai | Apache-2.0 | Pretrained generative sparse mixture-of-experts model supporting multiple languages. | Large Language Model · Transformers · Multilingual | 1,032 | 220 |
| Mixtral 8x22B V0.1 | v2ray | Apache-2.0 | Pretrained generative sparse mixture-of-experts model released by Mistral AI, with multilingual support. | Large Language Model · Transformers · Multilingual | 33 | 142 |
| Mixtral 8x7B Instruct V0.1 HF | LoneStriker | Apache-2.0 | Pretrained generative sparse mixture-of-experts large language model that outperforms Llama 2 70B on most benchmarks. | Large Language Model · Transformers · Multilingual | 45 | 4 |
| Mixtral 8x7B Instruct V0.1 | mistralai | Apache-2.0 | Pretrained generative sparse mixture-of-experts model that outperforms Llama 2 70B on most benchmarks. | Large Language Model · Transformers · Multilingual | 505.97k | 4,397 |
| Mixtral 8x7B V0.1 | mistralai | Apache-2.0 | Pretrained generative sparse mixture-of-experts model that outperforms Llama 2 70B on most benchmarks. | Large Language Model · Transformers · Multilingual | 42.78k | 1,709 |
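All of the checkpoints above are tagged for the Transformers library, so they can be loaded through the standard `AutoModelForCausalLM` / `AutoTokenizer` interface. Below is a minimal sketch, assuming the Hugging Face model ID `mistralai/Mixtral-8x7B-v0.1` and enough GPU memory (or `device_map="auto"` offloading) to hold the sparse mixture-of-experts weights; adjust the model ID and precision to match your hardware.

```python
# Minimal sketch: load a Mixtral checkpoint with Transformers and generate text.
# Assumes the model ID "mistralai/Mixtral-8x7B-v0.1"; swap in another listed
# checkpoint (e.g. the 8x22B or Instruct variants) as needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce the memory footprint
    device_map="auto",          # spread expert weights across available devices
)

prompt = "Sparse mixture-of-experts models route each token to"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For the Instruct variants, the same loading code applies, but prompts should be formatted with the tokenizer's chat template rather than passed as raw text.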